ASSESSMENT,
ACCOUNTABILITY, AND STUDENT LEARNING OUTCOMES
Richard Frye
INTRODUCTION

In the last fifteen years, two trends have gained prominence throughout higher education:
assessment and accountability. For various historical reasons, and
as a source of considerable confusion, both are erroneously referred
to by the same name, "assessment." The first, "assessment for excellence," is an
information feedback process to guide individual students, faculty
members, programs, and schools in improving their effectiveness.
Assessment instruments are designed to answer a wide range of
self-evaluative questions related to one larger question: how well
are we accomplishing our mission?
The second trend,
"assessment for accountability," is essentially a regulatory
process, designed to assure institutional conformity to specified
norms. Accountability advocates, especially state
legislatures, to a considerable extent view colleges as factories
and higher education as a production process (Astin, 1993, p. 17),
although there is widespread disagreement about what exactly they
are supposed to produce, and about how to measure it (Ewell, 1997).
Nevertheless, various performance measures, which attempt to measure
institutional effectiveness, particularly with regard to fiscal
efficiency and resource productivity, have been created and applied
to public universities and colleges throughout the country.
Although the terms
"assessment" and "accountability" are often used interchangeably,
they have important differences. In general, when we assess our own
performance, it's assessment; when others assess our performance,
it's accountability. That is, assessment is a set of initiatives we
take to monitor the results of our actions and improve ourselves;
accountability is a set of initiatives others take to monitor the
results of our actions, and to penalize or reward us based on the
outcomes. They have very different flavors. Although assessment
efforts over the past dozen years have been largely focused on
aggregate statistics for entire schools, accreditation review boards
recently have been increasing pressure on institutions to actively
engage departments and students in the assessment-learning-change
cycle (Gentemann, 1994). If learning is our business, how well are
we doing at all levels (assessment), and how can we demonstrate that
to others (accountability)?
This increasing focus
on assessment and accountability has powered a shift away from
prestige-based concepts of institutional excellence, in which size
of endowments, accomplishments or credentials of faculty, or types
of programs, for example, were assumed to be indicators of
institutional quality or effectiveness, and also away from
curriculum-based models that emphasize what is presented, toward
learning-based models which emphasize what students know and can
actually do. The emerging measure of institutional excellence is how
well institutions develop student talents and abilities, i.e.,
student learning outcomes (Astin, 1985, 1993, 1998). The purpose of
this paper is to provide an introduction to some of the
relationships among assessment, accountability, and student
learning, and to inform a discussion of these issues in the Western
community. Section I describes in more detail the relationships
between assessment and accountability; Section II discusses some of
the current thinking about student learning and how to improve it;
Section III discusses the role of assessment in improving student
learning, and provides some examples; Section IV suggests possible
directions here at Western for shifting institutional focus to
student learning, and offers two recommendations.
I.
ACCOUNTABILITY AND ASSESSMENT
A. ASSESSMENT FOR EXCELLENCE

"Assessment is not an end in itself but a vehicle for educational improvement." (AAHE,
1992). As shown in Figure 1, given the attributes of entering
students, measurement of an array of student outcomes provides
feedback about how well individual courses, programs, and the
university as a whole are accomplishing their stated missions and
goals. Assessment aims at the continuing improvement of student
development, and is generally consistent with a "value-added"
concept of education; note that the rationale for having better
programs is to ensure better student outcomes.
As shown in Figure 1,
the collection of assessment information is only the first step in a
four-part process. To be useful, it must be analyzed and reflected
upon by appropriate decision makers, and then used to design and
apply changes. In each iterative cycle, modified programs are then
reassessed and readjusted, continually improving effectiveness.
Even at the
departmental level, new guidelines for program reviews are shifting
the focus away from a preoccupation with departmental assets or
curricular structure and more toward "how resources are used, the
consequences of these uses, and the way in which students actually
experience the major" (Gentemann, 1994).
Western's recent
accreditation review indicated that the Office of Institutional
Assessment and Testing (OIAT) must do more to deliver useful
assessment information to academic units, and individual academic
units must do more to integrate assessment practices into their
programs. The bold arrows in Figure 1 show the current flow of
assessment data at Western; the dotted lines show the parts of the
feedback loop which need further development. These are discussed in
more detail in Section III.
B. ASSESSMENT FOR ACCOUNTABILITY

Accountability measures
are an attempt to assert more direct public control over higher
education, as shown in Figure 2. They are primarily concerned with
resource allocation and fiscal efficiency. While it is completely
appropriate for those who pay the bills--taxpayers, parents, and
students--to evaluate critically what they get for their money from
public education, performance measures as they are currently defined
in Washington State remain problematical, for at least two reasons.
First, because they are
measured on arbitrary scales, their meanings are ambiguous. Second,
the measures themselves direct institutional goals to some extent,
rather than the other way around. Resulting University policy is
driven to achieve specific measurement targets, and these may be at
odds with the University's larger mission and goals, including the
enhancement of student learning.
Two performance
measures which illustrate this point are fall-to-fall retention of
students and the graduation efficiency index; both are commonly
regarded as measures to be maximized. The rationale is that for the
sake of fiscal efficiency, a student should enter school, stay
enrolled, take only the courses necessary to graduate, and then
leave as soon as possible to make room for another student. This
kind of thinking assumes a factory model of education, in which the
measure of output is degree attainment, and the measure of cost is
time to degree.
Such a view penalizes
institutions for various kinds of normal student behavior which make
the numbers look bad, but which might serve students and their
educations very well--like taking a double major or taking elective
courses irrelevant to the major. Incentives are created for
institutions to eliminate these students, to narrow their
educational options, or to encourage them to go elsewhere for their
educations, all questionable goals from the standpoint of student
learning.
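
For illustration only, the sketch below computes simplified versions of these two measures for a hypothetical cohort. The function names and numbers are invented, and the graduation-efficiency calculation (credits required for the degree divided by credits actually earned) is a deliberately simplified stand-in rather than the official state definition; it serves only to show how a double major or exploratory electives depress the index even when they serve the student well.

```python
# Illustrative only: simplified versions of the two performance measures
# discussed above, applied to hypothetical numbers (not Western data).

def fall_to_fall_retention(entering_cohort: int, returned_next_fall: int) -> float:
    """Share of an entering fall cohort still enrolled the following fall."""
    return returned_next_fall / entering_cohort

def graduation_efficiency(credits_required: int, credits_earned: int) -> float:
    """Simplified stand-in for a graduation efficiency index: 1.0 means the
    student earned only the credits required for the degree; lower values
    mean "extra" credits such as a double major or exploratory electives."""
    return credits_required / credits_earned

if __name__ == "__main__":
    print(f"Fall-to-fall retention:   {fall_to_fall_retention(2500, 2100):.1%}")  # 84.0%
    print(f"Efficiency, minimal path: {graduation_efficiency(180, 182):.2f}")     # 0.99
    print(f"Efficiency, double major: {graduation_efficiency(180, 225):.2f}")     # 0.80
```

Maximizing either number rewards the narrowest possible path through the institution, which is precisely the incentive problem described above.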
Assessment derives its
legitimacy from the quality of its measurements, and those being
measured generally know the area being assessed best. University
mission statements ought to be the place to find out what is
important, and therefore what should be measured. Since student
learning figures prominently in most academic mission statements,
student learning outcomes may have special appeal as performance
measures.
C. THE ROLE OF TECHNOLOGY

The
assessment-accountability waters are further muddied by developments
in academic technology. Manufacturers of computer software and
hardware have been heavily lobbying both legislators and academics
nationwide to substitute electronic and media technologies, such as
web-based distance learning, for more traditional, face-to-face
educational practices (Jacklet, 1998).
At present there is no
reason to suppose that these computer technologies will necessarily
either improve learning or lower costs. Although there are certainly
ways in which such technologies can be applied effectively to
increase either faculty productivity or student learning, or both
(Chickering & Ehrmann, 1998), technology can never entirely
replace the face-to-face interactions among students and between
students and faculty whose importance to student development has
been clearly demonstrated (Astin, 1993, 1998).
Nevertheless, such
lobbying is a persuasive distraction at all levels. Using student
learning outcomes and faculty productivity as the measures of
effectiveness of educational systems in general and of new
technologies in particular would help to assure that only those
technologies which are both cost-effective and learning-effective
are adopted.
D. RECONCILING ASSESSMENT AND ACCOUNTABILITY

Student learning
transcends facts and concepts, and includes the values, attitudes,
self-concepts, and world views students evolve in the interactive
intellectual and social environment which colleges foster. Grounding
accountability in student learning, with measures designed by the
units being measured, would provide the most rational basis to
measure university performance.
Accountability aims at
improving fiscal efficiency, but is blind to issues of educational
quality. Assessment aims at improving the quality of education, but
is necessarily constrained by budgets. A focus on student learning
outcomes can be a bridge that links the two.
Effective and useful
accountability measures should have two qualities. First, they must
be unambiguous, monotonically increasing or decreasing
measures of either costs or benefits; i.e., we should all agree whether we
want more or less of whatever they measure. Second, they must be
linked in some way to indicators of quality.
It turns out that
student learning outcomes constitute useful measures of quality in
and of themselves. They are consistent with the stated missions of
higher education; improving them is a valid indicator of improved
institutional performance. Such indicators, when combined with cost
data, could also be used effectively as measures of changes in
institutional fiscal efficiency or overall performance over time.
Performance measures
based on student learning outcomes would be unambiguous; they would
tell us whether institutions are providing the same levels of
learning at lower cost, or providing improved levels of learning for
the same cost. Either type of measure satisfies the two criteria for
performance indicators, and gets more directly at the tension
between assessment and accountability: minimizing the cost of a
truly excellent education.
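
As a sketch of what such an indicator might look like--the composite learning index, the cost figure, and the function name below are all assumptions for illustration, not measures proposed in this paper or required by the state--learning outcomes and cost data could be combined and tracked over time:

```python
# Hypothetical illustration: combining a student-learning-outcome index with
# cost data to track institutional performance over time. The "learning index"
# is assumed to be some agreed-upon composite of outcome measures; all numbers
# are invented.

def learning_per_dollar(learning_index: float, cost_per_student: float) -> float:
    return learning_index / cost_per_student

year_1 = learning_per_dollar(learning_index=72.0, cost_per_student=11_000)
year_2 = learning_per_dollar(learning_index=75.5, cost_per_student=11_000)

change = (year_2 - year_1) / year_1
print(f"Change in learning per dollar: {change:+.1%}")
# Same cost, improved learning: an unambiguous gain of about +4.9%.
```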
II. STUDENT LEARNING OUTCOMES

Sharpening the focus of
higher education onto student learning outcomes goes beyond mere
tinkering with traditional structures and methods; it really
constitutes a paradigm shift in educational philosophy and practice.
An increasingly accepted view among educational scholars is that
traditional structures are dysfunctional and overdue for change
(Miller, 1998). To remedy this, "students and their learning should
become the focus of everything we do. . .from the instruction that
we provide, to the intellectual climate that we create, to the
policy decisions that we make" (Cross, 1998).
At this point, it is
useful to make some distinctions between "student outcomes" and
"student learning outcomes." Student outcomes generally refer to
aggregate statistics on groups of students, like graduation rates,
retention rates, transfer rates, and employment rates for an
entering class or a graduating class. These "student outcomes" are
actually institutional outcomes; they attempt to measure comparative
institutional performance, not changes in students themselves due to
their college experience. They have generally been associated with
accountability reporting.
Unfortunately,
student-outcomes statistics are often "output-only" measures
(Astin, 1993). That is, they are computed without regard to incoming
student differences and without regard to how different students
experienced the college environment. As a result, they do not
distinguish how much an observed measurement is the product of the
institution and its programs on students, and how much is due to
other factors, such as socioeconomic status, general intelligence,
or which high school was attended, for example, and can therefore be
misleading.
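
One standard statistical response to this problem, sketched below with invented data (the variable names, coefficients, and cohorts are assumptions, not findings from the literature cited here), is to adjust outcomes for entering-student characteristics and compare what is left over, rather than comparing raw outputs:

```python
# Minimal, hypothetical sketch of why "output-only" comparisons mislead.
# Two programs are simulated with identical institutional effects but
# different entering students; their raw exit scores differ, while their
# input-adjusted residuals do not. (Illustrative data only.)
import numpy as np

rng = np.random.default_rng(42)

def simulate_program(n, mean_entering_gpa):
    entering = rng.normal(mean_entering_gpa, 0.3, n)
    # Same "true" institutional contribution (+5) for both programs.
    exit_score = 50 + 10 * entering + 5 + rng.normal(0, 3, n)
    return entering, exit_score

ent_a, out_a = simulate_program(400, mean_entering_gpa=3.5)  # stronger intake
ent_b, out_b = simulate_program(400, mean_entering_gpa=3.0)  # weaker intake

# Pool the data and regress exit scores on entering characteristics.
entering = np.concatenate([ent_a, ent_b])
outcome = np.concatenate([out_a, out_b])
X = np.column_stack([np.ones(entering.size), entering])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
residual = outcome - X @ coef

print("Raw exit means:     A = %.1f, B = %.1f" % (out_a.mean(), out_b.mean()))
print("Adjusted residuals: A = %+.2f, B = %+.2f"
      % (residual[:400].mean(), residual[400:].mean()))
# Raw means differ by roughly 5 points purely because of intake; the adjusted
# residuals are both near zero, reflecting equal institutional effects.
```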
"Student learning
outcomes," on the other hand, encompass a wide range of student
attributes and abilities, both cognitive and affective, which are a
measure of how their college experiences have supported their
development as individuals. Cognitive outcomes include demonstrable
acquisition of specific knowledge and skills, as in a major; what do
students know that they didn't know before, and what can they do
that they couldn't do before? Affective outcomes are also of
considerable interest; how has their college experience impacted
students' values, goals, attitudes, self-concepts, world views, and
behaviors? How has it developed their many potentials? How has it
enhanced their value to themselves, their families, and their
communities?
There are essentially
three threads which must be interwoven into a program dedicated to
the improvement of student learning: shifting curricular focus to
student learning; developing faculty as effective teachers; and the
integration of assessment into curriculum at several levels. These
are discussed in some detail in the next several sections.
A. SHIFTING CURRICULAR FOCUS

There are thousands of
articles and hundreds of books on student learning; fortunately,
several scholars have painstakingly sifted through this material and
summarized important conclusions on which the studies are in general
agreement. Perhaps the best known is Chickering and Gamson's "Seven
Principles for Good Practice in Undergraduate Education" (1987). The Seven
Principles provide a useful introduction to the thinking behind a
learning-based approach to higher education, and are listed below
(this annotated version is adapted from Ehrmann & Chickering,
1998):
1. Good Practice
Encourages Contacts Between Students and Faculty. Frequent
student-faculty contact in and out of class is a most important
factor in student motivation and involvement. Faculty concern helps
students get through rough times and keep on working. Knowing a few
faculty members well enhances students' intellectual commitment and
encourages them to think about their own values and plans.
2. Good Practice
Develops Reciprocity and Cooperation Among Students. Learning
is enhanced when it is more like a team effort than a solo race.
Good learning, like good work, is collaborative and social, not
competitive and isolated. Working with others often increases
involvement in learning. Sharing one's ideas and responding to
others improves thinking and deepens understanding.
3. Good Practice
Uses Active Learning Techniques. Learning is not a spectator
sport. Students do not learn much just sitting in classes listening
to teachers, memorizing prepackaged assignments, and spitting out
answers. They must talk about what they are learning, write
reflectively about it, relate it to past experiences, and apply it
to their daily lives. They must make what they learn part of
themselves.
4. Good Practice
Gives Prompt Feedback. Knowing what you know and don't know
focuses your learning. In getting started, students need help in
assessing their existing knowledge and competence. Then, in classes,
students need frequent opportunities to perform and receive feedback
on their performance. At various points during college, and at its
end, students need chances to reflect on what they have learned,
what they still need to know, and how they might assess themselves.
5. Good Practice
Emphasizes Time on Task. Time plus energy equals learning.
Learning to use one's time well is critical for students and
professionals alike. Allocating realistic amounts of time means
effective learning for students and effective teaching for faculty.
6. Good Practice
Communicates High Expectations. Expect more and you will get
it. High expectations are important for everyone--for the poorly
prepared, for those unwilling to exert themselves, and for the
bright and well motivated. Expecting students to perform well
becomes a self-fulfilling prophecy.
7. Good Practice
Respects Diverse Talents and Ways of Learning. Many roads lead
to learning. Different students bring different talents and styles
to college. Brilliant students in a seminar might be all thumbs in a
lab or studio; students rich in hands-on experience may not do so
well with theory. Students need opportunities to show their talents
and learn in ways that work for them. Then they can be pushed to
learn in new ways that do not come so easily.
A consistent and
unifying theme throughout the Seven Principles is student
involvement--with faculty, with other students, and especially with
their studies. These points resonate with Astin's (1993)
identification of student involvement as a major factor in student
talent development; increased levels of involvement, including high
levels of cultural diversity and community service, are strongly
associated with many measures of success after college. Therefore,
one logical direction for improving student learning outcomes is to
establish policies which encourage and enhance many types of student
involvement, including academic involvement; involvement with
faculty, student peers, and mentors; and involvement in work, both
on and off campus.
These same principles
are also consistent with Marchese (1998), who has recently reviewed
at length the implications of recent developments in neuroscience,
anthropology, cognitive science, and evolutionary studies for our
understanding of human learning. These developments form the impetus
for new pedagogical approaches in higher education; they demonstrate
conclusively that student learning is a complex, personally unique,
and interactive process, and that traditional approaches have many
built-in shortcomings which can be greatly improved upon. Ewell
(1997) has condensed Marchese's discourse into a summary list of
ways to improve student learning, presented here in edited form:
By emphasizing
application and experience. Applied experiences like
internships and service learning try to break down artificial
barriers between "academic" and "real-world" practice, while
effective curricular designs foster appropriate knowledge and skills
"just in time" for concrete application in current classwork or
experience.
By having faculty
constructively model the learning process. "Apprenticeship"
models of teaching allow students to directly watch and internalize
expert practice. Such settings also assign students consequential
roles emphasizing correct practices. The demonstrable effectiveness
of undergraduate participation in faculty research is a case in
point, as are the internship or practicum components of many
existing practice disciplines.
By linking
established concepts to new situations. The best gains occur
when students are given both the conceptual "raw materials" with
which to create new applications and active cues about how to put
them together. For such approaches to work as advertised, though,
students must do the work themselves and faculty must assiduously
avoid "telling" them how to make these linkages.
By stimulating
interpersonal collaboration. Research findings on
collaborative learning are overwhelmingly positive, with instances
of effective practice ranging from within-class study groups to
cross-curricular learning communities.
By providing rich
and frequent feedback on performance. How students are
assessed powerfully affects how they study and learn. Managing the
frequency and consequences of such assessments, by using weekly
quizzes or ungraded practice assignments, for instance, creates
iterative opportunities for students to try out skills, to examine
small failures, and to receive advice about how to correct them.
By consistently
developing a limited set of clearly identified, cross-disciplinary
skills that are publicly held to be important. Intentional
and integrated "learning plans" can affect learning powerfully.
Needed integration must be both "horizontal" (emphasizing the
application of key skills in different contexts) and "vertical"
(fostering sequential vectors of development) to be effective. And
both depend critically on making collective campus commitments about
what should be learned in the first place.

If the Seven Principles to a large extent emphasize the importance of different kinds of student involvement to enhance learning, four additional principles of curricular design emerge from these recent discoveries about human learning:

1) Learning is enhanced by engaging the natural learning functions of the brain, which involve processes of incremental and sequential integration. Students form their own meanings from their interactive experiences with new information, in ways that are personally unique; what works for one student may not work for another.

2) An appropriate and continuing level of challenge stimulates student participation and learning. Too much or too little discourages interest.

3) Assessment procedures which provide frequent feedback are an important part of learning. Entrenched practices of midterm, final, and term paper--or less--may serve faculty as evaluative tools, but they deprive students of the rich learning engendered by ongoing assessment and feedback practices.

4) Ideas must be put into practice and experienced in personal ways for students to embody and deepen their learning. Teaching methods which emphasize application--internships, service learning, experiential education, apprenticeships, research, and other such practices--all help to transfer abstract learning into concrete and measurable skills.
B. ENHANCING TEACHING EXCELLENCE

An increased emphasis
on student learning will have major impacts on the structure and
practice of teaching. An institutional commitment to student
learning could give faculty significantly increased responsibility
for real teaching excellence. Emphasis on learning demands a
different kind of teacher, and a different kind of teaching, from
the traditional model. It may no longer be enough that college
teachers are competent in their disciplines; they are likely to be
called upon increasingly to create, develop, and manage stimulating
learning environments, using a variety of resources, abilities, and
technologies, including assessment resources, in order to deepen and
enrich student learning.
In response to these
increasing demands, relatively more resources will be needed to
support the development of faculty. Such support could take many
forms. Western's recently created Center for Instructional
Innovation is already supporting faculty in the application of
information technologies to their courses. This Center, perhaps in
cooperation with other currently existing units, like the Faculty
Development Advisory Committee, which currently offers grants to
faculty for the development of teaching, might play an expanded role
in faculty development as teachers. Alternatively, various forms of
"out-sourcing," such as a continuing workshop series with leading
thinkers, could also stimulate faculty development as teachers.
Whatever the form, a
meaningful emphasis on student learning demands some kind of serious
program for faculty development as teachers. One excellent example
of a comprehensive support program is the Learning Resources Unit
(LRU) at the British Columbia Institute of Technology (BCIT). "(The
LRU) was established in 1988 as a key catalyst for educational
excellence at BCIT. Staffed with more than 25 instructional
designers, technical writers, editors, graphic artists and clerical
personnel, its mandate is to improve the teaching and learning
process through faculty, curriculum, and learning-skills development
initiatives." (BCIT web page.)
The LRU provides
workshops, teaching aids, and consultations with faculty for course,
syllabus, and curriculum development. It also provides faculty with
a number of confidential resources for development of teaching
skills, including instructional skills workshops, mid-term student
evaluations, videotaping, and one-on-one professional classroom
observation and feedback.
Combining findings
about learning and about teaching, a preliminary model of
institutional excellence emerges, as shown in Figure 3. Adopting the
best educational practices and structuring courses, curricula, and
university support programs to stimulate student involvement
enhances the conditions for learning and individual development.
Assessment of student learning outcomes then provides feedback which
guides further improvements in policy.
III. THE ROLE
OF ASSESSMENT
A. THE ASSESSMENT LEARNING CYCLE

Assessment will be a
fundamental and integral part of any curriculum based on student
learning outcomes. Basically the same assessment learning cycle,
shown in Figure 4, takes place at the levels of the student, the
course, the program, the college, and the university as a whole.
It is worth
emphasizing: assessment is not just the measurement of learning; it
is in itself an integral part of learning. Assessment is the first
step in a continual learning cycle which includes measurement,
feedback, reflection, and change. The purpose of assessment is not
merely to gather information; the purpose of assessment is to foster
improvement. Frequent assessment of students helps them to refine
concepts and deepen their understanding; it also conveys high
expectations, which further stimulate learning. "Students
overwhelmingly reported that the single most important ingredient
for making a course effective is getting rapid response" (Wiggins,
1997).
Similarly, assessments
of faculty teaching by students and faculty development consultants
help teachers to improve their teaching and course organization.
Program assessments tell departments and curriculum committees how
well programs are meeting their objectives; and comprehensive
university-level assessments provide feedback about how effectively
university policies are contributing to the accomplishment of the
university's mission and goals.
Over several years
beginning in 1988, a group of distinguished scholars met regularly
to share ideas and experiences and to formulate principles for
assessment. Their set of "Nine Principles of Good Practice for
Assessing Student Learning," (AAHE Assessment Forum, 1992) is
patterned after the learning principles discussed above, and
clarifies the linkages between assessment and student learning:
1. The assessment of
student learning begins with educational values. We measure
what is most important to our mission and goals.
2. Assessment is
most effective when it reflects an understanding of learning as
multidimensional, integrated, and revealed in performance over
time. Learning entails not only what students know but what
they can do with what they know; it involves not only knowledge and
abilities but values, attitudes, and habits of mind that affect both
academic success and performance beyond the classroom.
3. Assessment works
best when the programs it seeks to improve have clear, explicitly
stated purposes. Assessment is a goal-oriented process.
Assessment as a process pushes a campus toward clarity about where
to aim and what standards to apply; clear, shared, implementable
goals are the cornerstone for assessment that is focused and
useful.
4. Assessment
requires attention to outcomes but also and equally to the
experiences that lead to those outcomes. To improve outcomes,
we need to know the curricula, teaching, and student effort that
lead to particular outcomes.
5. Assessment works
best when it is ongoing, not episodic. Though isolated,
"one-shot" assessment can be better than none, improvement is best
fostered when assessment entails a linked series of activities
undertaken over time, monitoring progress toward intended goals in a
spirit of continuous improvement.
6. Assessment
fosters wider improvement when representatives from across the
educational community are involved. Student learning is a
campus-wide responsibility; the aim over time is to involve people
from across the educational community. Assessment is not a task for
small groups of experts but a collaborative activity; its aim is
wider, better-informed attention to student learning by all parties
with a stake in its improvement.
7. Assessment makes
a difference when it begins with issues of use and illuminates
questions that people really care about. To be useful,
information must be connected to issues or questions that people
really care about. The point of assessment is not to gather data and
return "results"; it is a process that starts with the questions of
decision-makers, that involves them in the gathering and
interpreting of data, and that informs and helps guide continuous
improvement.
8. Assessment is
most likely to lead to improvement when it is part of a larger set
of conditions that promote change. Assessment alone changes
little. Its greatest contribution comes on campuses where the
quality of teaching and learning is visibly valued and worked at,
where information about learning outcomes is seen as an integral
part of decision-making.
9. Through
assessment, educators meet responsibilities to students and to the
public. Our deepest obligation--to ourselves, our students,
and society--is to improve. Those to whom educators are accountable
have a corresponding obligation to support such attempts at
improvement.
B. MISSIONS AND
MEASURES
The first assessment
principle above, measure things that matter, accentuates the
important link that must exist between a unit's mission and its
assessment measures. "The mission of an institution is the answer to
the question, what do you do and for whom?. . .Colleges need to be
clear about whom they serve and how they serve them, and to measure
their results to determine how well they deliver on their promises"
(Miller, 1998). Put another way, "a strong institutional mission
statement provides an invaluable starting point for assessment. .
.assessment cannot and should not take place in the absence of a
clear sense as to what matters most at the institution" (Banta,
1996). Effective accomplishment of stated goals is the most
appropriate measure of institutional performance and effectiveness.
The same principle applies to all levels of assessment.
For example, although
Western's mission statement does contain language which asserts that
Western "nurtures the intellectual, ethical, social, physical and
emotional development" of its students, the statement lacks the
specificity necessary to form the basis of clear and measurable
performance criteria. Clarifying the mission of the University in
terms of specific performance objectives and developmental goals for
students is an essential prerequisite to an integrated,
learning-based academic program.
C. EXAMPLE PROGRAMS BASED ON STUDENT LEARNING OUTCOMES

Pioneering efforts in
assessment and student learning have been made at several colleges.
While there may be little direct transferability between the paths
these schools have followed and the path that will be chosen at
Western, the experiences of these schools are nevertheless
instructive. They provide useful maps of approaches that work.
1. Alverno
College
Alverno College is a
small, independent, four-year liberal arts college for women,
located in Milwaukee, and is widely recognized for its pioneering
work in assessment. Over the last twenty-five years the Alverno
faculty has developed a highly sophisticated system of
student-assessment-as-learning and
assessment-through-the-curriculum, for which it has received
explicit recognition and considerable financial support from
numerous foundations (Alverno, 1994).
Over many years of
work, Alverno has defined eight measurable Abilities that a
successful liberal arts education should develop: Communication,
Analysis, Problem Solving, Valuing in Decision-Making, Social
Interaction, Global Perspectives, Effective Citizenship, and
Aesthetic Responsiveness. Each of these Abilities has in turn been
divided into eight developmental levels--generally ranging from
fundamental identification at the first level to integrated
application at the highest level.
Abilities have three
important characteristics. They are: integrated, involving an
integrated set of skills; developmental, implying an increasingly
complex hierarchy of processes; and transferable, broadly useful and
applicable across the student's future roles and settings. Every
course at Alverno defines two specific sets of learning objectives;
the first pertains to the levels of traditional knowledge and skills
associated with the course; the second pertains to the Abilities
addressed in the course.
Therefore, to pass a
course, students must not only demonstrate appropriate mastery of
course material by doing something, but must also demonstrate
mastery of the Abilities by how they do it. For example, a math
course would not only have specific math skills students must
demonstrate, it might also have specific levels of Communication,
Analysis, and Problem Solving Abilities the student must demonstrate
as well, and which instructors have agreed to assess.
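
Purely as a hypothetical illustration of this dual-requirement structure--the class names, Ability levels, and checking logic below are invented for this sketch and do not describe Alverno's actual system--a course's two sets of objectives might be represented like this:

```python
# Hypothetical sketch of the dual-requirement structure described above:
# a course specifies both discipline objectives and required Ability levels,
# and a student passes only by demonstrating both. Names, levels, and data
# are invented for illustration; this is not Alverno's actual system.
from dataclasses import dataclass, field

@dataclass
class Course:
    name: str
    discipline_objectives: set[str]           # e.g., specific math skills
    required_ability_levels: dict[str, int]   # Ability -> minimum level (1-8)

@dataclass
class StudentRecord:
    demonstrated_objectives: set[str] = field(default_factory=set)
    ability_levels: dict[str, int] = field(default_factory=dict)

def passes(course: Course, student: StudentRecord) -> bool:
    has_content = course.discipline_objectives <= student.demonstrated_objectives
    has_abilities = all(
        student.ability_levels.get(ability, 0) >= level
        for ability, level in course.required_ability_levels.items()
    )
    return has_content and has_abilities

calculus = Course(
    name="Calculus I",
    discipline_objectives={"limits", "derivatives", "optimization"},
    required_ability_levels={"Communication": 2, "Analysis": 3, "Problem Solving": 3},
)

student = StudentRecord(
    demonstrated_objectives={"limits", "derivatives", "optimization"},
    ability_levels={"Communication": 3, "Analysis": 3, "Problem Solving": 2},
)

print(passes(calculus, student))  # False: Problem Solving level 2 < required 3
```

The point of the structure is that content mastery alone is not sufficient; the student in this invented example does not yet pass because one required Ability level has not been demonstrated.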
2. King's
College
King's College is a
small, Catholic, liberal arts college. Its comprehensive assessment
program tracks the development in all students of a series of
transferable skills, derived from its mission statement, which quite
specifically articulates the College's responsibilities for student
development (King's, 1999).
King's has a CORE
Curriculum which "focuses in a deliberate and systematic manner the
skills of liberal learning: Critical Thinking, Effective Writing,
Effective Oral Communication, Library and Information Literacy,
Computer Competence, Creative Thinking and Problem Solving,
Quantitative Reasoning, and Moral Reasoning." In addition, "each
department. . .defines each transferable skill within the context of
the major and then divides the skill into specific competencies..."
King's College also
incorporates two integrative projects into all student programs of
study: the Diagnostic Project, and the Senior Integrated Assessment.
The Sophomore-Junior
Diagnostic Project: "Each department or program designs a screening
exercise to determine each student's ability to transfer critical
thinking and effective communication to an appropriate project
related to the major field of study. Faculty interact with students
throughout the project and share results with them. If the proper
level of skill is not apparent, the student is referred to an
appropriate office (such as the Learning Skills Center) for
assistance. The process also evaluates the student's likelihood of
success in the major."
The Senior Integrated
Assessment: "Each department or program designs an exercise, usually
in the context of a required senior course, a capstone seminar, or a
project, to allow the faculty and student to examine the latter's
success in integrating learning in the major with advanced levels of
the transferable skills of liberal learning."
3. California State
University, Chico
The approach adopted by
the College of Behavioral and Social Sciences at California State
University at Chico provides an appealing example of an approach to
assessing student learning in an institution much like Western. Its
approach is rooted in two basic premises: first, faculty in the unit
have particularly high teaching loads, and therefore no extra time;
and second, faculty wanted to be able to demonstrate the
effectiveness of their own programs on their own terms--given
budgetary uncertainties (Jacob, 1998).
The plan was divided
into three steps. The first step was for each department to engage
in a dialogue about what it means to be a major in this department,
and what a major in this department should know; that is, to define
departmental learning objectives. As obvious as these questions are,
what was learned in each department was often a revelation.
The second step asked
faculty to link those learning objectives with the learning
processes and experiences which would lead to the desired learning
outcomes. This required departments to address objectives
explicitly, and to consider dropping courses which met no learning
objectives.
The third step, which is still in progress, is to
identify and implement assessment procedures. The plan has
resulted in the adoption of a wide range of
assessment tools and a spate of curricular reforms in nearly all
departments. While the search for better assessment tools continues,
the next step will involve exchanging ideas among units, perhaps
focusing discussions on identifying some set of "best practices" in
assessment. The experience at Chico State demonstrates that a simple
plan can be highly beneficial, and that program benefits begin to
accrue as soon as dialogue begins at the departmental level about
student learning, curricular goals, and assessment practices. The
all-important first step is to open a dialogue about student
learning and curricular objectives.
IV. TOWARD A
CURRICULUM BASED ON STUDENT LEARNING
A. BACKGROUND

Since the inception of
state accountability reporting requirements over ten years ago,
Western has created and maintained extensive databases and developed
analytical capabilities in assessment and in survey research. The
Office of Institutional Assessment and Testing (OIAT) and the Office
of Survey Research have generated scores of reports on student
attitudes, behaviors, and performance, and their relationship to
program effectiveness, in the form of entering and graduating
student profiles, alumni satisfaction surveys, employer satisfaction
surveys, and program reviews.
State reporting
requirements have recently been extended to include additional
performance targets, and recent communications from the Washington
State Higher Education Coordinating Board suggest that additional
reporting requirements regarding student learning outcomes, at the
level of academic units, will very likely also be required in the
near future.
At the same time, the
accreditation review process has also increased its emphasis on
assessment of student learning. Ewell (1997) has suggested that,
since nearly universal accountability requirements duplicate many of
the traditional elements of peer review, accreditation review should
narrow its focus to core academic processes, which would or could
include the integration of student learning outcomes into curricula,
and the incorporation of best teaching and learning practices, such
as the Seven Principles, into academic programs. Western's recent
Accreditation Review is evidence that such a shift is already
happening. Its recommendation that Western's academic units must be
more actively involved in assessment is entirely consistent both
with this new role for accreditation and with a shift in mission and
policy toward student learning.
Referring back to
Figures 1 and 2, what all of this means is that assessment to date
has been largely driven by the regulatory process; assessment has
been about accountability, fiscal efficiency, and resource
allocation. These trends are not going to go away, although they are
likely to continue to evolve. However, recent evidence suggests a
growing convergence of accountability requirements and accreditation
requirements regarding the central importance of student learning
outcomes as measures of institutional performance.
B. ROLE OF THE OFFICE OF INSTITUTIONAL ASSESSMENT AND TESTING (OIAT)

OIAT maintains
extensive databases on student characteristics and student outcomes.
This includes data from the Student Tracking System maintained by
the Registrar's Office, for information about student backgrounds,
enrollment history, coursework, grades, and majors, and information
from a variety of student, alumni, faculty, and employer surveys.
In addition, Western
has participated intermittently for many years in the Cooperative
Institutional Research Program (CIRP), a comprehensive freshman
survey developed and administered through the Higher Education
Research Institute at UCLA, and its corresponding senior survey, the
College Student Survey (CSS). Collectively, these surveys generate
detailed longitudinal information on student goals, behaviors,
activities, expectations, and values, both as they enter Western and
as they graduate.
In early 1998 OIAT
assembled departmental information from a number of recent survey
instruments, including the CSS, and provided summaries of this data
to department chairs. Such reports could be made even more valuable
if they were constructed with substantial input from the academic
units themselves, and if databases were expanded to facilitate
analysis on a departmental level.
Although the CIRP has
been regularly administered to entering freshmen in recent years,
administration of the CSS has been limited, with sample sizes too
small to permit useful inferences about individual programs.
Beginning this academic year, however, OIAT plans to expand the CSS,
providing comparable entry and exit surveys on all native students,
so that a comprehensive longitudinal database can be formed, from
which to assess student development while at Western and beyond.
Development of a similar entering survey instrument for transfer
students is underway.
Expansion of these
capabilities is specifically designed to provide assessment data to
individual academic programs about the impacts of their programs on
student development. Academic units are invited to write for these
surveys a number of tailored questions which are of particular
interest to them about their students' experiences with their
programs. Applying a variety of statistical techniques including
frequency analysis, cross-tabulations, analysis of variance, block
regression, and factor analysis, OIAT will be able to investigate an
extensive array of impacts of Western and its programs on student
learning and development.
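
As a rough sketch of the kinds of analyses listed above--the survey items, department names, and data below are simulated for illustration and are not OIAT's actual instruments or code--a cross-tabulation and a one-way analysis of variance on such data might look like this:

```python
# Hypothetical sketch: cross-tabulation and one-way ANOVA on simulated
# entry/exit survey data, illustrating the kinds of analyses mentioned
# above. Variable names and data are invented; this is not OIAT's code.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 300
departments = rng.choice(["Biology", "History", "Economics"], size=n)
entry_goal = rng.choice(["degree", "exploration", "career change"], size=n)

# Simulated self-reported growth in critical thinking (1-5 scale) at exit.
growth = np.clip(rng.normal(3.5, 0.8, n).round(), 1, 5)

survey = pd.DataFrame({"department": departments,
                       "entry_goal": entry_goal,
                       "critical_thinking_growth": growth})

# Cross-tabulation: entering goals by department.
print(pd.crosstab(survey["department"], survey["entry_goal"]))

# One-way ANOVA: does reported growth differ across departments?
groups = [g["critical_thinking_growth"].values
          for _, g in survey.groupby("department")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
```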
C. THE ROLE OF ACADEMIC
UNITS
In the future the
capability for department-specific reporting can be expanded, and
tailored to meet data needs of individual programs. First, however,
academic departments must examine their own missions with regard to
student learning objectives, and how they want to measure their
success at accomplishing them, so that appropriate data can be
developed.
There are essentially
three kinds of questions academic units must investigate. First,
what kinds of affective and cognitive outcomes are essential goals
of their programs? ". . .(W)hat should their graduates be able to
know, think, do, believe, or value?" (Peterson and Hayward, 1989.)
Second, how are those
outcomes to be measured in ways that provide meaningful feedback
about program effectiveness? "It is not unusual for lofty goals to
be identified that are not really taught. Special attention should
be given to ways in which connections are made among goals and
elements of the curriculum." (Gentemann, 1994.)
And third, how will the
various academic units incorporate the best practices in teaching
and assessment into their programs in ways that enhance student
learning and that are truly valuable and useful, or "authentic":
"authentic
achievement defines significant intellectual accomplishment by
adults as construction of knowledge through disciplined inquiry to
produce discourse, products, or performances that have meaning or
value beyond success in school. . .but this 'real world' dimension
constitutes only one of three criteria for authentic intellectual
work; the other two insist on construction of knowledge through
disciplined inquiry--both of which pay significant attention to
students' basic knowledge and skills." (Newmann,
1998.)
Faculty from Alverno
College have made it clear that their twenty-five year pioneering
struggle with these issues has been a difficult one. However, they
suggest (Alverno, 1998) that their program began modestly, with a
commitment to student learning as their common goal. This commitment
was reinforced by a President who provided and enforced an action
deadline for the inception of their new program, ready or not. Then,
through much dialogue over many years, they were able to identify
the eight Abilities, to define levels of those Abilities they could
all agree upon, and to reorganize their academic programs and
infrastructure around learning.
Similar experiences
have been reported wherever this inquiry has been undertaken. This
is the fundamental value of assessment in practice; learning about a
thing is the inevitable result of attending to it, and improvement
is the inevitable result of learning.
CONCLUSIONS

There is ample evidence
to suggest that reorienting Western's educational policies and
practices toward the improvement of student learning outcomes would,
over time, significantly improve the quality of education of Western
students and graduates.
Such a reorientation
would necessarily be an ongoing process; over time it would likely
constitute a quantum shift in our approach to education. It would
probably imply changes over time in our mission and goals, in the
structure of our curricula, in assessment procedures from the
classroom on up, in the responsibilities of faculty, staff, and
administrators, and in the organizational structure of the
University. However, all of these are the kinds of changes which can
evolve in an organic way specific to Western and its community of
students, faculty, staff, and administrators. The important thing is
to begin the process, and to allow it to develop.
RECOMMENDATIONS

A true commitment to
student learning is a paradigm shift, but it doesn't have to happen
all at once. The first recommendation--the all-important first
step--is to initiate a campus-wide exploration and discussion
of whether and how to redefine Western's mission and goals to
reflect a commitment to excellence in student learning, and to
define strategies for achieving such goals. Faculty within academic
units must bear a particular responsibility for beginning a dialogue
about their own major programs, examining their willingness and
ability to restructure their programs, courses, and assessment
procedures to be consistent with improving learning outcomes. They
must be willing to ask the three questions posed at Chico State: 1)
What should our majors know; 2) How can they best learn these
things; and 3) How can we measure our success at teaching them?
The second
recommendation is to establish some kind of "Faculty Development
Center," which would provide confidential consultations,
resource and technical support, and training to help faculty develop
as teachers. Such an office could be an extension of the new Center
for Instructional Innovation, or it could be modeled after the
Learning Resources Unit at BCIT mentioned in Section II, which
provides a wide range of support services, including course
development, definition of course objectives, assessment
alternatives, and skills development. We should want to provide
explicit support to improve both the quality of teaching and
the productivity of individual faculty, and to provide incentives
for teaching excellence.
To obtain a list of
references used for this issue of Dialogue, download the list from
the Assessment website.
Richard Frye, Ph.D., is
a Planning Analyst for the Office of Institutional Assessment and
Testing at Western Washington University, Bellingham, WA. He is a
1967 graduate of the United States Naval Academy and earned his
Ph.D. in economics from the University of Rhode Island in 1975. His
e-mail address is richf@cms.wwu.edu; his telephone
number is (360) 650-3905.
Published by the Office of Institutional Assessment and Testing: Dr. Joseph E. Trimble, Director; Gary R. McKinney, General Editor. Technical assistance by the Center for Instructional Innovation: Dr. Kris Bulcroft, Director; Web design by Karen Casto.